[Amazon FSx for NetApp ONTAP] Considering whether it is worth "going out of your way" to run Inactive data compression on the SnapMirror source even when the destination's Tiering Policy is All


Even when the SnapMirror destination's Tiering Policy is All, there is a benefit to running Inactive data compression on the source
2024.01.24


Hello, this is のんピ (@non____97).

Have you ever wondered whether, even when the SnapMirror destination volume's Tiering Policy is All, it is worth running Inactive data compression on the source to keep the destination's storage costs down? I have.

The point is whether there is any benefit to "going out of your way" to run Inactive data compression on the source.

As verified in the article below, separate compression is applied when data is tiered to capacity pool storage, regardless of whether it is already compressed on the SSD.

Therefore, I see no need to force Inactive data compression on the SnapMirror source just "to reduce physical data usage on capacity pool storage".

Then, is there really no benefit at all to running Inactive data compression on the source when the destination's Tiering Policy is All?

I became curious whether there is any benefit beyond the point above.

The verification points are as follows.

  1. Can SnapMirror transfer data while preserving the data-reduction effect of Inactive data compression in the first place?
  2. After the transfer, when data tiered to capacity pool storage is written back to the SSD, is it written back in compressed form?

The first point has also been confirmed in the article below. In that article, however, deduplication ran in parallel. This time, to make the effect easier to see, only Inactive data compression is applied, without deduplication, before transferring with SnapMirror.

Whether data tiered to capacity pool storage is written back to the SSD in compressed form has also been confirmed in the article below. The results there were that it can be written back compressed in some cases and cannot in others. This time, I add the further condition that the data was processed by Inactive data compression on the SnapMirror source.

Let's actually try it.

Summary up front

  • Even when the SnapMirror destination's Tiering Policy is All, there is a benefit to running Inactive data compression on the source
    • SnapMirror can transfer data while preserving the data-reduction effect of Inactive data compression
    • Even with Tiering Policy All, data is written to the SSD once before being tiered
    • To keep the SSD from filling up, you need to provision it with headroom or throttle the transfer bandwidth
    • Running Inactive data compression increases the amount of data even a small SSD can absorb
    • As a result, it helps keep the provisioned SSD size down
  • Because the amount of data SnapMirror transfers also shrinks, it also helps reduce the transfer time and the data-transfer charges
    • However, since Inactive data compression can run on only one volume per file system at a time, whether it actually shortens the total migration time needs case-by-case consideration
  • Note that if data transferred by SnapMirror is tiered to capacity pool storage and then written back to the SSD, the data-reduction effect of the compression applied on the source is lost
    • After the write-back, the physical data usage on the SSD becomes larger than the physical data usage on capacity pool storage
  • There is no need to force Inactive data compression on the SnapMirror source just "to reduce physical data usage on capacity pool storage"
    • Separate compression is applied on capacity pool storage anyway
  • Data already scanned by Inactive data compression is not rescanned when Inactive data compression runs again

Let's try it

Verification environment

The verification environment is as follows.

Verification environment architecture diagram

The SnapMirror source FSxN file system (FSxN 1 below) reuses the environment from the article below.

The Storage Efficiency, volume, and aggregate information for the SnapMirror destination FSxN file system (FSxN 2 below) is as follows.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId05f23d527aa7ad7a7-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 136KB
                               Total Physical Used: 288KB
                    Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 136KB
Total Data Reduction Physical Used Without Snapshots: 288KB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 136KB
Total Data Reduction Physical Used without snapshots and flexclones: 288KB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 616KB
Total Physical Used in FabricPool Performance Tier: 2.64MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 616KB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 2.64MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 136KB
               Physical Space Used for All Volumes: 136KB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 288KB
              Physical Space Used by the Aggregate: 288KB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 0B
             Physical Size Used by Snapshot Copies: 0B
              Snapshot Volume Data Reduction Ratio: 1.00:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     860.6GB   861.8GB 1.12GB   21.32MB       0%                    0B                          0%                                  0B                   0B            0B              0%                      0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              1.12GB         0%
      Aggregate Metadata                             1.98MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    46.47GB         5%

      Total Physical Used                           21.32MB         0%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Creating the cluster peering

First, set up cluster peering.

Before that, check the IP addresses of the intercluster LIFs on each FSxN file system.

::*> cluster identity show

          Cluster UUID: 52caefe1-b747-11ee-be16-5542f946a54e
          Cluster Name: FsxId012f5aba611482f32
 Cluster Serial Number: 1-80-000011
      Cluster Location:
       Cluster Contact:
              RDB UUID: 52cb835d-b747-11ee-be16-5542f946a54e

::*> network interface show -service-policy default-intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
FsxId012f5aba611482f32
            inter_1      up/up    10.0.8.36/24       FsxId012f5aba611482f32-01
                                                                   e0e     true
            inter_2      up/up    10.0.8.230/24      FsxId012f5aba611482f32-02
                                                                   e0e     true
2 entries were displayed.

::*> cluster identity show

          Cluster UUID: 89fa46a2-b750-11ee-b8c0-db82abb5ccf5
          Cluster Name: FsxId05f23d527aa7ad7a7
 Cluster Serial Number: 1-80-000011
      Cluster Location:
       Cluster Contact:
              RDB UUID: 89fae596-b750-11ee-b8c0-db82abb5ccf5

::*> network interface show -service-policy default-intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
FsxId05f23d527aa7ad7a7
            inter_1      up/up    10.0.8.179/24      FsxId05f23d527aa7ad7a7-01
                                                                   e0e     true
            inter_2      up/up    10.0.8.95/24       FsxId05f23d527aa7ad7a7-02
                                                                   e0e     true
2 entries were displayed.

Create the cluster peering from FSxN 2.

::*> cluster peer create -peer-addrs 10.0.8.36 10.0.8.230

Notice: Use a generated passphrase or choose a passphrase of 8 or more characters. To ensure the authenticity of the peering relationship, use a phrase or sequence of
        characters that would be hard to guess.

Enter the passphrase:
Confirm the passphrase:

Notice: Now use the same passphrase in the "cluster peer create" command in the other cluster.

Create the cluster peering from FSxN 1 as well.

::*> cluster peer create -peer-addrs 10.0.8.179 10.0.8.95

Notice: Use a generated passphrase or choose a passphrase of 8 or more characters. To ensure the authenticity of the peering relationship, use a phrase or sequence of
        characters that would be hard to guess.

Enter the passphrase:
Confirm the passphrase:

::*> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
FsxId05f23d527aa7ad7a7    1-80-000011           Available      ok

Cluster peering is now established.

Creating the SVM peering

Next, set up SVM peering.

Create the SVM peering from FSxN 1.

::*> vserver peer create -vserver svm -peer-vserver svm2 -applications snapmirror -peer-cluster FsxId05f23d527aa7ad7a7

Info: [Job 45] 'vserver peer create' job queued

Approve the SVM peering on the FSxN 2 side.

::*> vserver peer show-all
            Peer        Peer                           Peering        Remote
Vserver     Vserver     State        Peer Cluster      Applications   Vserver
----------- ----------- ------------ ----------------- -------------- ---------
svm2        svm         pending      FsxId012f5aba611482f32
                                                       snapmirror     svm

::*> vserver peer accept -vserver svm2 -peer-vserver svm

Info: [Job 45] 'vserver peer accept' job queued

::*> vserver peer show-all
            Peer        Peer                           Peering        Remote
Vserver     Vserver     State        Peer Cluster      Applications   Vserver
----------- ----------- ------------ ----------------- -------------- ---------
svm2        svm         peered       FsxId012f5aba611482f32
                                                       snapmirror     svm

SnapMirror initial transfer

Perform the SnapMirror initial transfer.

Since I want to check the physical usage after the transfer, set the destination volume's Tiering Policy to None beforehand.

::*> snapmirror protect -path-list svm:vol1 -destination-vserver svm2 -policy MirrorAllSnapshots -auto-initialize true -support-tiering true -tiering-policy none
[Job 46] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol1".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Uninitialized
                                      Transferring   0B        true    01/20 05:16:48

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Uninitialized
                                      Transferring   5.01GB    true    01/20 05:17:43

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Uninitialized
                                      Transferring   6.27GB    true    01/20 05:17:59

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Uninitialized
                                      Finalizing     9.36GB    true    01/20 05:18:46

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -

::*> snapmirror show -instance

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm2:vol1_dst
                          Destination Cluster: -
                          Destination Vserver: svm2
                           Destination Volume: vol1_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm2
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                    Newest Snapshot Timestamp: 01/20 05:16:48
                            Exported Snapshot: snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                  Exported Snapshot Timestamp: 01/20 05:16:48
                                      Healthy: true
                              Relationship ID: 1c2fbc48-b753-11ee-9c61-cd186ff61130
                          Source Vserver UUID: 95c966c9-b748-11ee-be16-5542f946a54e
                     Destination Vserver UUID: a809406f-b751-11ee-b8c0-db82abb5ccf5
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 0B
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:0
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 01/20 05:18:57
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:2:31
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId05f23d527aa7ad7a7-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 1
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 10053200014
               Total Transfer Time in Seconds: 130
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

Finalizing was reached at 9.36GB.
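As a cross-check, the Total Transfer Bytes value from snapmirror show -instance (10053200014) converts to roughly that same figure:

```shell
# Convert Total Transfer Bytes from "snapmirror show -instance" to GiB;
# it should land near the 9.36GB progress shown during Finalizing.
awk 'BEGIN { printf "%.2f GiB\n", 10053200014 / (1024 ^ 3) }'
```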

The Storage Efficiency, volume, aggregate, and Snapshot information of FSxN 2 after the initial transfer completed is as follows.

::*> volume efficiency show -volume vol1_dst -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume   state    policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- -------- -------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm2    vol1_dst Disabled -      false       true               efficient               false         true            true                              true         false

::*> volume efficiency show -volume vol1_dst -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume   state    progress          last-op-begin last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- -------- -------- ----------------- ------------- ------------------------ ------------ --------------- -------------- -----------------
svm2    vol1_dst Disabled Idle for 00:00:00 -             Sat Jan 20 05:19:56 2024 0B           0%              0B             0B

::*> volume show -volume vol1_dst -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available,physical-used, physical-used-percent
vserver volume   size    available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- -------- ------- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm2    vol1_dst 57.65GB 6.35GB    57.65GB         54.77GB 48.42GB 88%          0B                 0%                         48GB                48.74GB       85%                   48.42GB      88%                  -                 48.42GB             0B                                  0%

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           48.74GB       5%
             Footprint in Performance Tier            48.84GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        324.4MB       0%
      Delayed Frees                                   106.3MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 49.16GB       5%

      Effective Total Footprint                       49.16GB       5%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId05f23d527aa7ad7a7-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 97.12GB
                               Total Physical Used: 9.38GB
                    Total Storage Efficiency Ratio: 10.35:1
Total Data Reduction Logical Used Without Snapshots: 48.38GB
Total Data Reduction Physical Used Without Snapshots: 9.32GB
Total Data Reduction Efficiency Ratio Without Snapshots: 5.19:1
Total Data Reduction Logical Used without snapshots and flexclones: 48.38GB
Total Data Reduction Physical Used without snapshots and flexclones: 9.32GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 5.19:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 97.16GB
Total Physical Used in FabricPool Performance Tier: 9.53GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 10.19:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 48.42GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 9.47GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 5.11:1
                Logical Space Used for All Volumes: 48.38GB
               Physical Space Used for All Volumes: 48.38GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 48.88GB
              Physical Space Used by the Aggregate: 9.38GB
           Space Saved by Aggregate Data Reduction: 39.50GB
                 Aggregate Data Reduction SE Ratio: 5.21:1
              Logical Size Used by Snapshot Copies: 48.74GB
             Physical Size Used by Snapshot Copies: 327.2MB
              Snapshot Volume Data Reduction Ratio: 152.54:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 152.54:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     850.8GB   861.8GB 10.92GB  9.58GB        1%                    39.50GB                     78%                                 1.76GB               0B            39.50GB         78%                     1.76GB           -
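The 78% in sis-space-saved-percent lines up with the saved space divided by the sum of used and saved space. A minimal sketch, assuming that is how the percentage is derived (my reading of the output, not confirmed against documentation):

```shell
# Hypothetical cross-check: sis-space-saved-percent as saved / (used + saved),
# using usedsize 10.92GB and sis-space-saved 39.50GB from the output above.
awk 'BEGIN { printf "%.0f%%\n", 39.50 / (10.92 + 39.50) * 100 }'
```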

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             50.28GB         6%
      Aggregate Metadata                            138.6MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    56.27GB         6%

      Total Physical Used                            9.58GB         1%


      Total Provisioned Space                       122.7GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> snapshot show -volume vol1_dst
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm2     vol1_dst
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                         327.2MB     1%    1%

::*> snapshot show -volume vol1_dst -instance

                                    Vserver: svm2
                                     Volume: vol1_dst
                                   Snapshot: snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                       Snapshot Data Set ID: 4294968323
                Snapshot Master Data Set ID: 6457977134
                              Creation Time: Sat Jan 20 05:16:48 2024
                              Snapshot Busy: true
                             List of Owners: snapmirror
                              Snapshot Size: 327.2MB
                 Percentage of Total Blocks: 1%
                  Percentage of Used Blocks: 1%
                    Consistency Point Count: 155
                                    Comment: -
                        File System Version: 9.13
                   File System Block Format: 64-bit
                           Physical Snap ID: 1
                            Logical Snap ID: 1
                      Database Record Owner: -
                              Snapshot Tags: SMCreated=snapmirror,
                                             SMDeleteMe=snapmirror
                              Instance UUID: 966e20c5-eee1-456f-bdbd-e3f44f1761c1
                               Version UUID: 06ad125a-3a96-432a-ab53-b190bce521c1
                            7-Mode Snapshot: false
            Label for SnapMirror Operations: -
                             Snapshot State: -
                       Constituent Snapshot: false
                                       Node: FsxId05f23d527aa7ad7a7-01
                     AFS Size from Snapshot: 48.74GB
          Compression Savings from Snapshot: 0B
                Dedup Savings from Snapshot: 0B
             VBN Zero Savings from Snapshot: 0B
Reserved (holes and overwrites) in Snapshot: 0B
                      Snapshot Logical Used: 48.74GB
         Performance Metadata from Snapshot: 1.87MB
                   Snapshot Inofile Version: 4
                                Expiry Time: -
                           Compression Type: none
                       SnapLock Expiry Time: -
                        Application IO Size: -
           Is Qtree Caching Support Enabled: false
                      Compression Algorithm: lzopro
            Snapshot Created for Conversion: false

Given that the Snapshot's logical data size is 48.74GB while Total Physical Used in aggr show-efficiency is 9.38GB, we can see that SnapMirror transferred the data while preserving the data-reduction effect of Inactive data compression.
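The efficiency ratio reported in the output can also be reproduced directly from the logical and physical figures; a quick arithmetic sketch:

```shell
# Reproduce "Total Data Reduction Efficiency Ratio Without Snapshots: 5.19:1"
# from Logical Used 48.38GB and Physical Used 9.32GB in the output above.
awk 'BEGIN { printf "%.2f:1\n", 48.38 / 9.32 }'
```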

Adding a file on the transfer source

The amount of data moved by the SnapMirror initial transfer cannot be confirmed with snapmirror show.

So, while we are at it, let's run a SnapMirror incremental transfer and confirm from snapmirror show as well that the data-reduction effect of Inactive data compression survives the transfer.

As preparation, add a 48GiB file on the SnapMirror source.

$ yes \
  $(base64 /dev/urandom -w 0 \
    | head -c 1K
  ) \
  | tr -d '\n' \
  | sudo dd of=/mnt/fsxn/vol1/1KB_random_pattern_text_block_48GiB_2 bs=4M count=12288 iflag=fullblock
12288+0 records in
12288+0 records out
51539607552 bytes (52 GB, 48 GiB) copied, 351.968 s, 146 MB/s
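The byte count follows directly from the dd parameters; a quick arithmetic check:

```shell
# bs=4M with count=12288 gives 12288 * 4 MiB = 51539607552 bytes = 48 GiB.
bytes=$((12288 * 4 * 1024 * 1024))
echo "${bytes} bytes = $((bytes / 1024 / 1024 / 1024)) GiB"
```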

$ ls -l /mnt/fsxn/vol1
total 101059616
-rw-r--r--. 1 root root 51539607552 Jan 20 04:38 1KB_random_pattern_text_block_48GiB
-rw-r--r--. 1 root root 51539607552 Jan 20 05:29 1KB_random_pattern_text_block_48GiB_2

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0a7c58a5f3a47283c.fs-012f5aba611482f32.fsx.us-east-1.amazonaws.com:/vol1 nfs4  122G   97G   25G  80% /mnt/fsxn/vol1

The Storage Efficiency, volume, aggregate, and Snapshot information of FSxN 1 after adding the file is as follows.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress           last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------ ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled 24768 KB (0%) Done Sat Jan 20 04:02:39 2024 Sat Jan 20 04:02:39 2024 0B           26%             480.0MB        96.89GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 24.73GB   128GB           121.6GB 96.87GB 79%          24.22MB            0%                         4KB                 96.86GB       76%   96.89GB      80%                  -                 96.89GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           96.86GB      11%
             Footprint in Performance Tier            97.11GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        644.8MB       0%
      Deduplication Metadata                          107.5MB       0%
           Temporary Deduplication                    107.5MB       0%
      Delayed Frees                                   255.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 97.85GB      11%

      Footprint Data Reduction                        93.01GB      10%
           Auto Adaptive Compression                  93.01GB      10%
      Effective Total Footprint                        4.83GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId012f5aba611482f32-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 144.3GB
                               Total Physical Used: 54.37GB
                    Total Storage Efficiency Ratio: 2.65:1
Total Data Reduction Logical Used Without Snapshots: 96.05GB
Total Data Reduction Physical Used Without Snapshots: 54.37GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.77:1
Total Data Reduction Logical Used without snapshots and flexclones: 96.05GB
Total Data Reduction Physical Used without snapshots and flexclones: 54.37GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.77:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 145.1GB
Total Physical Used in FabricPool Performance Tier: 55.48GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.62:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 96.89GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 55.48GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.75:1
                Logical Space Used for All Volumes: 96.05GB
               Physical Space Used for All Volumes: 96.03GB
               Space Saved by Volume Deduplication: 24.22MB
Space Saved by Volume Deduplication and pattern detection: 24.22MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 100.3GB
              Physical Space Used by the Aggregate: 54.37GB
           Space Saved by Aggregate Data Reduction: 45.94GB
                 Aggregate Data Reduction SE Ratio: 1.85:1
              Logical Size Used by Snapshot Copies: 48.21GB
             Physical Size Used by Snapshot Copies: 476KB
              Snapshot Volume Data Reduction Ratio: 106202.52:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 106202.52:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     798.4GB   861.8GB 63.34GB  62.02GB       7%                    45.94GB                     42%                                 2.05GB               0B            45.94GB         42%                     2.05GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             98.86GB        11%
      Aggregate Metadata                            10.42GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    108.7GB        12%

      Total Physical Used                           62.02GB         7%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> snapshot show -volume vol1
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                           204KB     0%    0%

Because we created a fairly large 48 GiB file, post-process deduplication (Storage Efficiency) has kicked in.

Let's stop Storage Efficiency.

::*> volume efficiency stop -volume vol1
The efficiency operation for volume "vol1" of Vserver "svm" is being stopped.

The Storage Efficiency, volume, aggregate, and Snapshot information on FSxN 1 after stopping Storage Efficiency is as follows.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:00:37 Sat Jan 20 05:25:41 2024 Sat Jan 20 05:34:41 2024 0B           26%             480MB          96.89GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 24.84GB   128GB           121.6GB 96.76GB 79%          126.4MB            0%                         4KB                 96.76GB       76%   96.89GB      80%                  -                 96.89GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           96.76GB      11%
             Footprint in Performance Tier            97.12GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        644.8MB       0%
      Deduplication Metadata                          107.5MB       0%
           Temporary Deduplication                    107.5MB       0%
      Delayed Frees                                   363.7MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 97.85GB      11%

      Footprint Data Reduction                        93.02GB      10%
           Auto Adaptive Compression                  93.02GB      10%
      Effective Total Footprint                        4.83GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId012f5aba611482f32-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 144.3GB
                               Total Physical Used: 54.19GB
                    Total Storage Efficiency Ratio: 2.66:1
Total Data Reduction Logical Used Without Snapshots: 96.05GB
Total Data Reduction Physical Used Without Snapshots: 54.18GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.77:1
Total Data Reduction Logical Used without snapshots and flexclones: 96.05GB
Total Data Reduction Physical Used without snapshots and flexclones: 54.18GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.77:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 145.1GB
Total Physical Used in FabricPool Performance Tier: 55.30GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.62:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 96.89GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 55.29GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.75:1
                Logical Space Used for All Volumes: 96.05GB
               Physical Space Used for All Volumes: 95.93GB
               Space Saved by Volume Deduplication: 126.4MB
Space Saved by Volume Deduplication and pattern detection: 126.4MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 100.1GB
              Physical Space Used by the Aggregate: 54.19GB
           Space Saved by Aggregate Data Reduction: 45.94GB
                 Aggregate Data Reduction SE Ratio: 1.85:1
              Logical Size Used by Snapshot Copies: 48.21GB
             Physical Size Used by Snapshot Copies: 480KB
              Snapshot Volume Data Reduction Ratio: 105317.50:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 105317.50:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     798.4GB   861.8GB 63.36GB  61.91GB       7%                    45.94GB                     42%                                 2.05GB               0B            45.94GB         42%                     2.05GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             98.86GB        11%
      Aggregate Metadata                            10.44GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    108.7GB        12%

      Total Physical Used                           61.91GB         7%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> snapshot show -volume vol1
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                           208KB     0%    0%

Deduplication appears to have saved 126.4 MB. Since this has little impact on the results, we will proceed with the verification as is.
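For reference, the deduplication savings relative to the logical data size work out to well under 1%, which is consistent with `dedupe-space-saved-percent` showing 0% in the output above (a quick back-of-the-envelope check using the logged figures):

```python
# Deduplication savings as a fraction of the logical data size,
# using the values reported by "volume show" above.
saved_mib = 126.4                 # dedupe-space-saved
logical_gib = 96.89               # logical-used

saved_fraction = saved_mib / (logical_gib * 1024)
print(f"{saved_fraction:.2%}")    # ~0.13%, which ONTAP rounds down to 0%
```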

Running Inactive data compression on the source

Let's run Inactive data compression on the source.

We also want to verify whether the compression savings are preserved when data that was compressed by Inactive data compression after a Snapshot was taken is then transferred with SnapMirror, so we take a Snapshot in advance.

Whether physical usage decreases when Inactive data compression is run after taking a Snapshot is verified in the following article.

::*> snapshot create -vserver svm -volume vol1 -snapshot test.2024-01-20_0537 -snapmirror-label test

::*> snapshot show -volume vol1
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                           212KB     0%    0%
                  test.2024-01-20_0537                     160KB     0%    0%
2 entries were displayed.

Run Inactive data compression.

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 0%
                                                  Phase1 L1s Processed: 7922
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 0
                                               Phase2 Blocks Processed: 0
                                     Number of Cold Blocks Encountered: 602800
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 597336
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 3082
             Time since Last Inactive Data Compression Scan ended(sec): 3069
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 3069
                           Average time for Cold Data Compression(sec): 13
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 91%
                                                  Phase1 L1s Processed: 89876
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 33013824
                                               Phase2 Blocks Processed: 30185472
                                     Number of Cold Blocks Encountered: 12600336
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 12560240
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 3268
             Time since Last Inactive Data Compression Scan ended(sec): 3255
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 3255
                           Average time for Cold Data Compression(sec): 13
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 12600336
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 12560240
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 14
             Time since Last Inactive Data Compression Scan ended(sec): 4
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 4
                           Average time for Cold Data Compression(sec): 12
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%
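Incidentally, the `Percentage` field shown while the scan is RUNNING appears to line up with `Phase2 Blocks Processed / Phase2 Total Blocks` (this correspondence is my inference from the second output above, not documented behavior):

```python
# Check the mid-scan "Percentage: 91%" against the Phase2 block counters
# reported in the second "inactive-data-compression show" output above.
processed = 30185472    # Phase2 Blocks Processed
total = 33013824        # Phase2 Total Blocks

print(f"{processed / total:.0%}")  # → 91%, matching the Percentage field
```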

The Storage Efficiency, volume, aggregate, and Snapshot information on FSxN 1 after running Inactive data compression is as follows.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:08:00 Sat Jan 20 05:25:41 2024 Sat Jan 20 05:34:41 2024 0B           26%             480MB          96.89GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 24.83GB   128GB           121.6GB 96.77GB 79%          126.4MB            0%                         4KB                 97.02GB       76%   96.89GB      80%                  -                 96.89GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.02GB      11%
             Footprint in Performance Tier            97.40GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        644.8MB       0%
      Deduplication Metadata                          107.5MB       0%
           Temporary Deduplication                    107.5MB       0%
      Delayed Frees                                   390.4MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 98.13GB      11%

      Footprint Data Reduction                        93.18GB      10%
           Auto Adaptive Compression                  93.18GB      10%
      Effective Total Footprint                        4.96GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId012f5aba611482f32-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 240.7GB
                               Total Physical Used: 13.27GB
                    Total Storage Efficiency Ratio: 18.15:1
Total Data Reduction Logical Used Without Snapshots: 95.61GB
Total Data Reduction Physical Used Without Snapshots: 13.23GB
Total Data Reduction Efficiency Ratio Without Snapshots: 7.22:1
Total Data Reduction Logical Used without snapshots and flexclones: 95.61GB
Total Data Reduction Physical Used without snapshots and flexclones: 13.23GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 7.22:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 242.0GB
Total Physical Used in FabricPool Performance Tier: 14.83GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 16.32:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 96.89GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 14.79GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 6.55:1
                Logical Space Used for All Volumes: 95.61GB
               Physical Space Used for All Volumes: 95.49GB
               Space Saved by Volume Deduplication: 126.4MB
Space Saved by Volume Deduplication and pattern detection: 126.4MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 105.0GB
              Physical Space Used by the Aggregate: 13.27GB
           Space Saved by Aggregate Data Reduction: 91.77GB
                 Aggregate Data Reduction SE Ratio: 7.92:1
              Logical Size Used by Snapshot Copies: 145.1GB
             Physical Size Used by Snapshot Copies: 256.2MB
              Snapshot Volume Data Reduction Ratio: 579.97:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 579.97:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     843.7GB   861.8GB 18.08GB  16.64GB       2%                    91.77GB                     84%                                 4.09GB               0B            91.77GB         84%                     4.09GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             99.14GB        11%
      Aggregate Metadata                            10.71GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    63.44GB         7%

      Total Physical Used                           16.64GB         2%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> snapshot show -volume vol1
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                           212KB     0%    0%
                  test.2024-01-20_0537                   255.7MB     0%    0%
2 entries were displayed.

Total Physical Used is 13.27GB, so the compression appears to be working effectively.
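The aggregate-level reduction ratio can be sanity-checked with simple arithmetic on the figures from the `aggr show-efficiency` output above (the displayed GB values are rounded, so the results differ slightly from what ONTAP reports):

```python
# Figures from the `aggr show-efficiency` output above
logical_used_gb = 105.0   # Logical Space Used by the Aggregate
physical_used_gb = 13.27  # Physical Space Used by the Aggregate

saved_gb = logical_used_gb - physical_used_gb
ratio = logical_used_gb / physical_used_gb

print(f"Space saved: {saved_gb:.2f}GB")  # ~91.73GB (ONTAP reports 91.77GB; display rounding)
print(f"SE ratio: {ratio:.2f}:1")        # ~7.91:1 (ONTAP reports 7.92:1)
```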

I was curious whether re-running Inactive data compression would rescan data blocks that have already been scanned, so let's try it.

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 0%
                                                  Phase1 L1s Processed: 21583
                                                    Phase1 Lns Skipped:
                                                                        L1:  7317
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 0
                                               Phase2 Blocks Processed: 0
                                     Number of Cold Blocks Encountered: 7056
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 4696
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 609
             Time since Last Inactive Data Compression Scan ended(sec): 599
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 599
                           Average time for Cold Data Compression(sec): 12
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 0%
                                                  Phase1 L1s Processed: 60771
                                                    Phase1 Lns Skipped:
                                                                        L1: 12789
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 0
                                               Phase2 Blocks Processed: 0
                                     Number of Cold Blocks Encountered: 7056
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 4696
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 621
             Time since Last Inactive Data Compression Scan ended(sec): 610
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 610
                           Average time for Cold Data Compression(sec): 12
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 7056
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 4696
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 629
             Time since Last Inactive Data Compression Scan ended(sec): 618
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 618
                           Average time for Cold Data Compression(sec): 12
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%

At 7,056, Number of Cold Blocks Encountered is dramatically smaller than the number of blocks scanned the first time Inactive data compression was run.

Let's try once more.

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 22%
                                                  Phase1 L1s Processed: 37096
                                                    Phase1 Lns Skipped:
                                                                        L1: 61594
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 33013824
                                               Phase2 Blocks Processed: 7376587
                                     Number of Cold Blocks Encountered: 1920
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 113
             Time since Last Inactive Data Compression Scan ended(sec): 103
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 103
                           Average time for Cold Data Compression(sec): 11
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 2344
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 13
             Time since Last Inactive Data Compression Scan ended(sec): 3
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 3
                           Average time for Cold Data Compression(sec): 11
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%

Number of Cold Blocks Encountered is 2,344.

Also, on the first Inactive data compression run, the difference between Number of Cold Blocks Encountered and Number of Compression Done Blocks was 40,096, yet the following run encountered only 7,056 cold blocks. If a re-run rescanned the data blocks that were left uncompressed, it should have encountered roughly 40,096 of them, so this shows that a re-run does not rescan the uncompressed data blocks.

In short: data blocks already scanned by Inactive data compression are not rescanned when Inactive data compression is re-run.

Let's also check values from CloudWatch metrics, such as StorageEfficiencySavings (the amount of data reduction from Storage Efficiency) and StorageUsed (the physical data usage of the SSD and capacity pool storage).

CloudWatch metrics for physical storage usage and data reduction after running Inactive data compression

The physical storage usage drops at the point when Inactive data compression was executed.
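These metrics can also be pulled programmatically. Below is a minimal boto3 sketch that builds a `get_metric_data` query for the two metrics mentioned above; the `AWS/FSx` namespace and the `FileSystemId`-only dimension set are assumptions (some FSx for ONTAP metrics take additional dimensions such as `StorageTier`), and the file system ID is a placeholder, so check both against your environment and the FSx documentation:

```python
def build_queries(file_system_id: str) -> list:
    """Build CloudWatch get_metric_data queries for the FSx metrics
    checked above. Namespace/dimension names are assumptions."""
    metrics = ["StorageUsed", "StorageEfficiencySavings"]
    return [
        {
            "Id": name.lower(),
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/FSx",
                    "MetricName": name,
                    "Dimensions": [{"Name": "FileSystemId", "Value": file_system_id}],
                },
                "Period": 300,        # 5-minute resolution
                "Stat": "Average",
            },
        }
        for name in metrics
    ]

queries = build_queries("fs-0123456789abcdef0")  # placeholder file system ID
# boto3.client("cloudwatch").get_metric_data(MetricDataQueries=queries,
#                                            StartTime=..., EndTime=...)
print([q["MetricStat"]["Metric"]["MetricName"] for q in queries])
```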

For reference, FSxN 1's Storage Efficiency, volume, aggregate, and Snapshot information is shown below. Nothing particularly interesting happened here.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:21:34 Sat Jan 20 05:25:41 2024 Sat Jan 20 05:34:41 2024 0B           26%             480MB          96.89GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 24.83GB   128GB           121.6GB 96.77GB 79%          126.4MB            0%                         4KB                 97.07GB       76%   96.89GB      80%                  -                 96.89GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.07GB      11%
             Footprint in Performance Tier            97.54GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        644.8MB       0%
      Deduplication Metadata                          107.5MB       0%
           Temporary Deduplication                    107.5MB       0%
      Delayed Frees                                   483.3MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 98.28GB      11%

      Footprint Data Reduction                        93.32GB      10%
           Auto Adaptive Compression                  93.32GB      10%
      Effective Total Footprint                        4.96GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId012f5aba611482f32-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 240.6GB
                               Total Physical Used: 13.17GB
                    Total Storage Efficiency Ratio: 18.26:1
Total Data Reduction Logical Used Without Snapshots: 95.47GB
Total Data Reduction Physical Used Without Snapshots: 13.14GB
Total Data Reduction Efficiency Ratio Without Snapshots: 7.27:1
Total Data Reduction Logical Used without snapshots and flexclones: 95.47GB
Total Data Reduction Physical Used without snapshots and flexclones: 13.14GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 7.27:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 242.0GB
Total Physical Used in FabricPool Performance Tier: 14.88GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 16.26:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 96.89GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 14.84GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 6.53:1
                Logical Space Used for All Volumes: 95.47GB
               Physical Space Used for All Volumes: 95.35GB
               Space Saved by Volume Deduplication: 126.4MB
Space Saved by Volume Deduplication and pattern detection: 126.4MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 105.0GB
              Physical Space Used by the Aggregate: 13.17GB
           Space Saved by Aggregate Data Reduction: 91.78GB
                 Aggregate Data Reduction SE Ratio: 7.97:1
              Logical Size Used by Snapshot Copies: 145.1GB
             Physical Size Used by Snapshot Copies: 310.8MB
              Snapshot Volume Data Reduction Ratio: 477.99:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 477.99:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     843.7GB   861.8GB 18.08GB  16.69GB       2%                    91.78GB                     84%                                 4.09GB               0B            91.78GB         84%                     4.09GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             99.29GB        11%
      Aggregate Metadata                            10.56GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    63.43GB         7%

      Total Physical Used                           16.69GB         2%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> snapshot show -volume vol1
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                           212KB     0%    0%
                  test.2024-01-20_0537                   310.4MB     0%    0%
2 entries were displayed.

Incremental SnapMirror transfer

Let's perform an incremental SnapMirror transfer.

::*> snapmirror update -destination-path svm2:vol1_dst
Operation is queued: snapmirror update of destination "svm2:vol1_dst".

::*> snapmirror show -fields source-path, destination-path, state, status, progress-last-updated, total-progress
source-path destination-path state        status       total-progress progress-last-updated
----------- ---------------- ------------ ------------ -------------- ---------------------
svm:vol1    svm2:vol1_dst    Snapmirrored Transferring 0B             01/20 05:59:02

::*> snapmirror show -fields source-path, destination-path, state, status, progress-last-updated, total-progress
source-path destination-path state        status       total-progress progress-last-updated
----------- ---------------- ------------ ------------ -------------- ---------------------
svm:vol1    svm2:vol1_dst    Snapmirrored Transferring 9.35GB         01/20 06:01:18

::*> snapmirror show -fields source-path, destination-path, state, status, progress-last-updated, total-progress
source-path destination-path state        status total-progress progress-last-updated
----------- ---------------- ------------ ------ -------------- ---------------------
svm:vol1    svm2:vol1_dst    Snapmirrored Idle   -              -

::*> snapmirror show -destination-path svm2:vol1_dst

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm2:vol1_dst
                          Destination Cluster: -
                          Destination Vserver: svm2
                           Destination Volume: vol1_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm2
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_055902
                    Newest Snapshot Timestamp: 01/20 05:59:02
                            Exported Snapshot: snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_055902
                  Exported Snapshot Timestamp: 01/20 05:59:02
                                      Healthy: true
                              Relationship ID: 1c2fbc48-b753-11ee-9c61-cd186ff61130
                          Source Vserver UUID: 95c966c9-b748-11ee-be16-5542f946a54e
                     Destination Vserver UUID: a809406f-b751-11ee-b8c0-db82abb5ccf5
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 9.35GB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:2:21
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 01/20 06:01:23
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:2:58
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId05f23d527aa7ad7a7-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 2
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 20095218242
               Total Transfer Time in Seconds: 271
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

Last Transfer Size is 9.35GB. The snapmirror show output also confirms that SnapMirror transfers the data while preserving the data reduction achieved by Inactive data compression.
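As a rough cross-check on the `snapmirror show` output, the incremental transfer's effective throughput follows from Last Transfer Size and Last Transfer Duration (simple arithmetic; this assumes ONTAP's "9.35GB" is a binary GiB figure, as is usual for ONTAP output):

```python
# Values from the `snapmirror show` output above
last_transfer_gib = 9.35             # Last Transfer Size
last_transfer_seconds = 2 * 60 + 21  # Last Transfer Duration 0:2:21

throughput_mib_s = last_transfer_gib * 1024 / last_transfer_seconds
print(f"{throughput_mib_s:.1f} MiB/s")  # 67.9 MiB/s
```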

FSxN 2's Storage Efficiency, volume, aggregate, and Snapshot information after the incremental transfer is shown below.

::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%

::*> volume efficiency show -volume vol1_dst -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume   state    progress          last-op-begin last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- -------- -------- ----------------- ------------- ------------------------ ------------ --------------- -------------- -----------------
svm2    vol1_dst Disabled Idle for 00:00:00 -             Sat Jan 20 06:02:47 2024 0B           0%              0B             0B

::*> volume show -volume vol1_dst -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available,physical-used, physical-used-percent
vserver volume   size    available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- -------- ------- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm2    vol1_dst 114.0GB 11.56GB   114.0GB         108.3GB 96.71GB 89%          126.4MB            0%                         95.88GB             97.15GB       85%                   96.71GB      89%                  -                 96.71GB             0B                                  0%

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.15GB      11%
             Footprint in Performance Tier            97.36GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        580.0MB       0%
      Delayed Frees                                   222.4MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 97.93GB      11%

      Effective Total Footprint                       97.93GB      11%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId05f23d527aa7ad7a7-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 339.0GB
                               Total Physical Used: 18.57GB
                    Total Storage Efficiency Ratio: 18.26:1
Total Data Reduction Logical Used Without Snapshots: 96.76GB
Total Data Reduction Physical Used Without Snapshots: 18.48GB
Total Data Reduction Efficiency Ratio Without Snapshots: 5.23:1
Total Data Reduction Logical Used without snapshots and flexclones: 96.76GB
Total Data Reduction Physical Used without snapshots and flexclones: 18.48GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 5.23:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 339.1GB
Total Physical Used in FabricPool Performance Tier: 18.87GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 17.97:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 96.84GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 18.78GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 5.16:1
                Logical Space Used for All Volumes: 96.76GB
               Physical Space Used for All Volumes: 96.63GB
               Space Saved by Volume Deduplication: 126.4MB
Space Saved by Volume Deduplication and pattern detection: 126.4MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 97.47GB
              Physical Space Used by the Aggregate: 18.57GB
           Space Saved by Aggregate Data Reduction: 78.91GB
                 Aggregate Data Reduction SE Ratio: 5.25:1
              Logical Size Used by Snapshot Copies: 242.3GB
             Physical Size Used by Snapshot Copies: 444.4MB
              Snapshot Volume Data Reduction Ratio: 558.25:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 558.25:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     841.3GB   861.8GB 20.46GB  18.97GB       2%                    78.91GB                     79%                                 3.50GB               0B            78.91GB         79%                     3.50GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             99.05GB        11%
      Aggregate Metadata                            329.3MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    65.82GB         7%

      Total Physical Used                           18.97GB         2%


      Total Provisioned Space                       179.0GB        20%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> snapshot show -volume vol1_dst
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm2     vol1_dst
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                         329.2MB     0%    0%
                  test.2024-01-20_0537                   115.0MB     0%    0%
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_055902
                                                           156KB     0%    0%
3 entries were displayed.

Since dedupe-space-saved is 126.4MB, we can see that the deduplication savings have been preserved.
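Incidentally, dedupe-space-saved-percent showing 0% is consistent with this: 126.4MB is tiny relative to the roughly 97GB of used space, so the percentage rounds down to 0%. A quick check:

```python
# Why dedupe-space-saved-percent reads 0% despite 126.4MB of savings:
# the savings are negligible relative to the volume's ~97GB used space.
dedupe_saved_mib = 126.4
used_gib = 96.71

percent = dedupe_saved_mib / (used_gib * 1024) * 100
print(f"{percent:.2f}%")  # about 0.13%, which truncates to 0%
```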

Changing the destination's Tiering Policy to All

Change the Tiering Policy of the SnapMirror destination volume to All.

::*> volume modify -vserver svm2 -volume vol1_dst -tiering-policy all
Volume modify successful on volume vol1_dst of Vserver svm2.

::*> volume show -volume vol1_dst -fields tiering-policy, cloud-retrieval-policy
vserver volume   tiering-policy cloud-retrieval-policy
------- -------- -------------- ----------------------
svm2    vol1_dst all            default

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.15GB      11%
             Footprint in Performance Tier            84.51GB      87%
             Footprint in FSxFabricpoolObjectStore
                                                      12.86GB      13%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        580.0MB       0%
      Delayed Frees                                   228.3MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 97.94GB      11%

      Footprint Data Reduction in capacity tier       10.42GB        -
      Effective Total Footprint                       87.52GB      10%

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.15GB      11%
             Footprint in Performance Tier             1.53GB       2%
             Footprint in FSxFabricpoolObjectStore
                                                      95.88GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        580.0MB       0%
      Delayed Frees                                   264.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 97.97GB      11%

      Footprint Data Reduction in capacity tier       77.66GB        -
      Effective Total Footprint                       20.31GB       2%

95.88GB of data has been tiered.

The Storage Efficiency, volume, aggregate, and Snapshot information on FSxN 2 after tiering completed is as follows.

::*> volume efficiency show -volume vol1_dst -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume   state    progress          last-op-begin last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- -------- -------- ----------------- ------------- ------------------------ ------------ --------------- -------------- -----------------
svm2    vol1_dst Disabled Idle for 00:00:00 -             Sat Jan 20 06:19:28 2024 0B           0%              0B             0B

::*> volume show -volume vol1_dst -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available,physical-used, physical-used-percent
vserver volume   size    available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- -------- ------- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm2    vol1_dst 114.0GB 11.56GB   114.0GB         108.3GB 96.72GB 89%          126.4MB            0%                         95.88GB             97.15GB       85%                   96.72GB      89%                  -                 96.72GB             -                                   -

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId05f23d527aa7ad7a7-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 339.0GB
                               Total Physical Used: 17.81GB
                    Total Storage Efficiency Ratio: 19.04:1
Total Data Reduction Logical Used Without Snapshots: 96.76GB
Total Data Reduction Physical Used Without Snapshots: 17.51GB
Total Data Reduction Efficiency Ratio Without Snapshots: 5.53:1
Total Data Reduction Logical Used without snapshots and flexclones: 96.76GB
Total Data Reduction Physical Used without snapshots and flexclones: 17.51GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 5.53:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 5.33GB
Total Physical Used in FabricPool Performance Tier: 188.6KB
Total FabricPool Performance Tier Storage Efficiency Ratio: 29641.38:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.52GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 188.6KB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 8466.74:1
                Logical Space Used for All Volumes: 96.76GB
               Physical Space Used for All Volumes: 96.64GB
               Space Saved by Volume Deduplication: 126.4MB
Space Saved by Volume Deduplication and pattern detection: 126.4MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 25.62GB
              Physical Space Used by the Aggregate: 17.81GB
           Space Saved by Aggregate Data Reduction: 7.81GB
                 Aggregate Data Reduction SE Ratio: 1.44:1
              Logical Size Used by Snapshot Copies: 242.3GB
             Physical Size Used by Snapshot Copies: 444.5MB
              Snapshot Volume Data Reduction Ratio: 558.09:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 558.09:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     858.0GB   861.8GB 3.75GB   4.33GB        0%                    7.81GB                      68%                                 351.8MB              17.80GB            7.81GB          68%                     351.8MB          -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              3.21GB         0%
      Aggregate Metadata                             8.35GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    49.10GB         5%

      Total Physical Used                            4.33GB         0%


      Total Provisioned Space                       179.0GB        20%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  96.72GB          -
      Logical Referenced Capacity                   96.25GB          -
      Logical Unreferenced Capacity                 477.0MB          -
      Space Saved by Storage Efficiency             78.92GB          -

      Total Physical Used                           17.80GB          -



2 entries were displayed.

::*> snapshot show -volume vol1_dst
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm2     vol1_dst
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                         329.2MB     0%    0%
                  test.2024-01-20_0537                   115.0MB     0%    0%
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_055902
                                                           156KB     0%    0%
3 entries were displayed.

The SSD physical usage has dropped significantly to 4.33GB. The data size on the capacity pool storage appears to be 17.80GB.
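The aggr show-space figures above are also internally consistent: the 17.80GB physically used in the object store equals the 96.72GB Logical Used minus the 78.92GB Space Saved by Storage Efficiency, a roughly 5.4:1 reduction on the capacity tier. A sketch verifying the arithmetic:

```python
# Cross-check the capacity pool numbers from "aggr show-space":
# physical used should equal logical used minus the efficiency savings.
logical_used_gib = 96.72   # Logical Used
saved_gib = 78.92          # Space Saved by Storage Efficiency
physical_used_gib = 17.80  # Total Physical Used

assert abs((logical_used_gib - saved_gib) - physical_used_gib) < 0.01
ratio = logical_used_gib / physical_used_gib
print(f"{ratio:.2f}:1")  # about 5.43:1
```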

Writing back to the SSD

Now let's write the data back to the SSD.

::*> volume modify -vserver svm2 -volume vol1_dst -tiering-policy none -cloud-retrieval-policy promote

Warning: The "promote" cloud retrieve policy retrieves all of the cloud data for the specified volume. If the tiering policy is "snapshot-only" then only AFS data is
         retrieved. If the tiering policy is "none" then all data is retrieved. Volume "vol1_dst" in Vserver "svm2" is on a FabricPool, and there are approximately
         102946693120 bytes tiered to the cloud that will be retrieved. Cloud retrieval may take a significant amount of time, and may degrade performance during that time.
         The cloud retrieve operation may also result in data charges by your object store provider.
Do you want to continue? {y|n}: y
Volume modify successful on volume vol1_dst of Vserver svm2.
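The approximately 102946693120 bytes quoted in the warning matches the 95.88GB shown as tiered in volume show-footprint, as a quick unit conversion confirms:

```python
# Convert the byte count from the "promote" warning into GiB and
# compare it with the 95.88GB footprint in FSxFabricpoolObjectStore.
tiered_bytes = 102946693120

tiered_gib = tiered_bytes / 2**30
print(f"{tiered_gib:.2f} GiB")  # 95.88 GiB
```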

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.15GB      11%
             Footprint in Performance Tier             1.93GB       2%
             Footprint in FSxFabricpoolObjectStore
                                                      95.47GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        580.0MB       0%
      Delayed Frees                                   264.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 97.97GB      11%

      Footprint Data Reduction in capacity tier       77.33GB        -
      Effective Total Footprint                       20.64GB       2%

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.15GB      11%
             Footprint in Performance Tier             9.02GB       9%
             Footprint in FSxFabricpoolObjectStore
                                                      88.40GB      91%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        580.0MB       0%
      Delayed Frees                                   276.5MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 97.99GB      11%

      Footprint Data Reduction in capacity tier       71.60GB        -
      Effective Total Footprint                       26.39GB       3%

Very slowly, but the data is gradually being written back.

The Storage Efficiency, volume, and aggregate information on FSxN 2 three hours after starting the SSD write-back is as follows.

::*> volume efficiency show -volume vol1_dst -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume   state    progress          last-op-begin last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- -------- -------- ----------------- ------------- ------------------------ ------------ --------------- -------------- -----------------
svm2    vol1_dst Disabled Idle for 00:00:00 -             Sat Jan 20 10:44:59 2024 0B           0%              0B             0B

::*> volume show -volume vol1_dst -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available,physical-used, physical-used-percent
vserver volume   size    available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- -------- ------- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm2    vol1_dst 114.0GB 11.54GB   114.0GB         108.3GB 96.73GB 89%          126.4MB            0%                         95.88GB             97.16GB       85%                   96.73GB      89%                  -                 96.73GB             0B                                  0%

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.16GB      11%
             Footprint in Performance Tier            38.96GB      40%
             Footprint in FSxFabricpoolObjectStore
                                                      58.68GB      60%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        580.0MB       0%
      Delayed Frees                                   487.9MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 98.21GB      11%

      Footprint Data Reduction in capacity tier       47.53GB        -
      Effective Total Footprint                       50.68GB       6%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId05f23d527aa7ad7a7-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 339.0GB
                               Total Physical Used: 45.29GB
                    Total Storage Efficiency Ratio: 7.49:1
Total Data Reduction Logical Used Without Snapshots: 96.77GB
Total Data Reduction Physical Used Without Snapshots: 44.91GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.16:1
Total Data Reduction Logical Used without snapshots and flexclones: 96.77GB
Total Data Reduction Physical Used without snapshots and flexclones: 44.91GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.16:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 135.3GB
Total Physical Used in FabricPool Performance Tier: 33.36GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 4.06:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.65GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 33.21GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.16:1
                Logical Space Used for All Volumes: 96.77GB
               Physical Space Used for All Volumes: 96.65GB
               Space Saved by Volume Deduplication: 126.4MB
Space Saved by Volume Deduplication and pattern detection: 126.4MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 51.50GB
              Physical Space Used by the Aggregate: 45.29GB
           Space Saved by Aggregate Data Reduction: 6.21GB
                 Aggregate Data Reduction SE Ratio: 1.14:1
              Logical Size Used by Snapshot Copies: 242.3GB
             Physical Size Used by Snapshot Copies: 445.1MB
              Snapshot Volume Data Reduction Ratio: 557.41:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 557.41:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     820.5GB   861.8GB 41.25GB  42.62GB       5%                    6.21GB                      13%                                 279.7MB              12.16GB            6.21GB          13%                     279.7MB          -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             40.65GB         4%
      Aggregate Metadata                             6.82GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    86.61GB        10%

      Total Physical Used                           42.62GB         5%


      Total Provisioned Space                       179.0GB        20%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  66.12GB          -
      Logical Referenced Capacity                   65.78GB          -
      Logical Unreferenced Capacity                 351.7MB          -
      Space Saved by Storage Efficiency             53.96GB          -

      Total Physical Used                           12.16GB          -



2 entries were displayed.

The SSD physical usage is now 42.62GB, which is already more than it was before the data was tiered to the capacity pool storage.

Check the following values from the CloudWatch metrics.

  • Physical data usage of the SSD and capacity pool storage: StorageUsed
  • Data reduction by Storage Efficiency: StorageEfficiencySavings
  • Physical data usage of the SSD: All SSD StorageUsed
  • Physical data usage of the capacity pool storage: All StandardCapacityPool StorageUsed
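For reference, these metrics live in the AWS/FSx namespace, and the per-tier values correspond to the StorageTier and DataType dimensions. Below is a minimal sketch of the GetMetricData queries; the file system ID is a placeholder and the dimension names reflect my understanding of the FSx for ONTAP metrics, so verify them against your own CloudWatch console:

```python
# Build CloudWatch GetMetricData queries for the four metrics above.
# "fs-0123456789abcdef0" is a placeholder file system ID.
FILE_SYSTEM_ID = "fs-0123456789abcdef0"

def fsx_query(query_id, metric, extra_dims=()):
    """Return one MetricDataQuery for an FSx file system metric."""
    dims = [{"Name": "FileSystemId", "Value": FILE_SYSTEM_ID}, *extra_dims]
    return {
        "Id": query_id,
        "MetricStat": {
            "Metric": {"Namespace": "AWS/FSx", "MetricName": metric,
                       "Dimensions": dims},
            "Period": 300,
            "Stat": "Average",
        },
    }

queries = [
    fsx_query("total_used", "StorageUsed"),
    fsx_query("se_savings", "StorageEfficiencySavings"),
    fsx_query("ssd_used", "StorageUsed",
              [{"Name": "StorageTier", "Value": "SSD"},
               {"Name": "DataType", "Value": "All"}]),
    fsx_query("pool_used", "StorageUsed",
              [{"Name": "StorageTier", "Value": "StandardCapacityPool"},
               {"Name": "DataType", "Value": "All"}]),
]

# Then pass the list to the CloudWatch API, e.g.:
# boto3.client("cloudwatch").get_metric_data(MetricDataQueries=queries, ...)
```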

Storage metrics of the SnapMirror destination during the SSD write-back 3

We can see that the data reduction savings decrease as the SSD write-back progresses.

Also, the rate at which the SSD's physical data usage increases does not match the rate at which the capacity pool storage's physical data usage decreases.

This suggests that the savings from Inactive data compression are not being preserved when the data is written back to the SSD.
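Using the aggregate numbers from before and after three hours of write-back, the imbalance is easy to quantify: the SSD grew by roughly 38GB while the capacity pool shrank by under 6GB, which is what you would expect if the blocks are rehydrated uncompressed. A quick calculation:

```python
# Compare how much the SSD grew against how much the capacity pool
# shrank during the write-back (values from "aggr show" above).
ssd_before_gib, ssd_after_gib = 4.33, 42.62     # physical-used
pool_before_gib, pool_after_gib = 17.80, 12.16  # composite-capacity-tier-used

ssd_growth = ssd_after_gib - ssd_before_gib     # ~38.29 GiB
pool_shrink = pool_before_gib - pool_after_gib  # ~5.64 GiB
print(f"SSD grew {ssd_growth:.2f} GiB; pool shrank {pool_shrink:.2f} GiB")
# If the compression savings survived the write-back, these two
# figures would be roughly equal.
```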

Wait for the SSD write-back to complete.

The CloudWatch metrics looked like the following.

Storage metrics of the SnapMirror destination after the SSD write-back completed

Compared with the state before tiering, the difference is obvious at a glance.

This shows that the data written back to the SSD does not retain the savings from Inactive data compression.

The Storage Efficiency, volume, aggregate, and Snapshot information on FSxN 2 is as follows.

::*> volume efficiency show -volume vol1_dst -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume   state    progress          last-op-begin last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- -------- -------- ----------------- ------------- ------------------------ ------------ --------------- -------------- -----------------
svm2    vol1_dst Disabled Idle for 00:00:00 -             Sun Jan 21 01:22:58 2024 0B           0%              0B             0B

::*> volume show -volume vol1_dst -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume   size    available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- -------- ------- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm2    vol1_dst 114.0GB 11.50GB   114.0GB         108.3GB 96.77GB 89%          126.4MB            0%                         95.88GB             97.96GB       86%                   96.77GB  89%                  -                 96.77GB             0B                                  0%

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.96GB      11%
             Footprint in Performance Tier            99.06GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        580.0MB       0%
      Delayed Frees                                    1.10GB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 99.63GB      11%

      Effective Total Footprint                       99.63GB      11%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId05f23d527aa7ad7a7-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 339.1GB
                               Total Physical Used: 92.08GB
                    Total Storage Efficiency Ratio: 3.68:1
Total Data Reduction Logical Used Without Snapshots: 96.81GB
Total Data Reduction Physical Used Without Snapshots: 90.96GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.06:1
Total Data Reduction Logical Used without snapshots and flexclones: 96.81GB
Total Data Reduction Physical Used without snapshots and flexclones: 90.96GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.06:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 339.2GB
Total Physical Used in FabricPool Performance Tier: 92.23GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 3.68:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 96.89GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 91.12GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.06:1
                Logical Space Used for All Volumes: 96.81GB
               Physical Space Used for All Volumes: 96.69GB
               Space Saved by Volume Deduplication: 126.4MB
Space Saved by Volume Deduplication and pattern detection: 126.4MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 98.29GB
              Physical Space Used by the Aggregate: 92.08GB
           Space Saved by Aggregate Data Reduction: 6.21GB
                 Aggregate Data Reduction SE Ratio: 1.07:1
              Logical Size Used by Snapshot Copies: 242.3GB
             Physical Size Used by Snapshot Copies: 1.19GB
              Snapshot Volume Data Reduction Ratio: 202.99:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 202.99:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     760.3GB   861.8GB 101.4GB  106.0GB       12%                   6.21GB                      6%                                  279.7MB              0B                           6.21GB          6%                      279.7MB          -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             100.7GB        11%
      Aggregate Metadata                             6.91GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    146.8GB        16%

      Total Physical Used                           106.0GB        12%


      Total Provisioned Space                       179.0GB        20%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> snapshot show -volume vol1_dst
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm2     vol1_dst
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                         329.2MB     0%    0%
                  test.2024-01-20_0537                   115.0MB     0%    0%
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_055902
                                                         776.8MB     1%    1%
3 entries were displayed.

Space Saved by Aggregate Data Reduction, which was 78.91GB before tiering, has fallen to just 6.21GB.

Running Inactive data compression on the destination volume

Let's also check what happens when Inactive data compression is run on the destination volume.

::*> volume efficiency inactive-data-compression start -volume vol1_dst -inactive-days 0
Inactive data compression scan started on volume "vol1_dst" in Vserver "svm2"

::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 0%
                                                  Phase1 L1s Processed: 73772
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 0
                                               Phase2 Blocks Processed: 0
                                     Number of Cold Blocks Encountered: 18846720
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 18785536
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 25229752
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 25152384
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 31
             Time since Last Inactive Data Compression Scan ended(sec): 14
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 14
                           Average time for Cold Data Compression(sec): 17
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%

The scan completed in about 13 minutes.
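As a rough sanity check, the Number of Compression Done Blocks in the second `show -instance` output lines up with the volume's logical size, assuming ONTAP's 4 KiB WAFL block size:

```python
# Sanity check: Number of Compression Done Blocks vs. the volume's logical size.
# Assumes ONTAP's 4 KiB WAFL block size.
BLOCK_SIZE = 4 * 1024
compression_done_blocks = 25_152_384  # from the "show -instance" output above

compressed_gib = compression_done_blocks * BLOCK_SIZE / 1024**3
print(f"{compressed_gib:.2f} GiB")  # ~95.95 GiB, close to the volume's 96.77GB logical-used
```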

The Storage Efficiency, volume, aggregate, and Snapshot information on FSxN 2 after the Inactive data compression run is as follows.

::*> volume efficiency show -volume vol1_dst -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume   state    progress          last-op-begin last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- -------- -------- ----------------- ------------- ------------------------ ------------ --------------- -------------- -----------------
svm2    vol1_dst Disabled Idle for 00:00:00 -             Sun Jan 21 01:56:49 2024 0B           0%              0B             0B

::*> volume show -volume vol1_dst -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume   size    available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- -------- ------- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm2    vol1_dst 114.0GB 11.50GB   114.0GB         108.3GB 96.77GB 89%          126.4MB            0%                         95.88GB             97.96GB       86%                   96.77GB  89%                  -                 96.77GB             0B                                  0%

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           97.96GB      11%
             Footprint in Performance Tier            98.99GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        580.0MB       0%
      Delayed Frees                                    1.03GB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 99.56GB      11%

      Footprint Data Reduction                        94.82GB      10%
           Auto Adaptive Compression                  94.82GB      10%
      Effective Total Footprint                        4.74GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId05f23d527aa7ad7a7-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 337.6GB
                               Total Physical Used: 14.31GB
                    Total Storage Efficiency Ratio: 23.59:1
Total Data Reduction Logical Used Without Snapshots: 95.28GB
Total Data Reduction Physical Used Without Snapshots: 14.15GB
Total Data Reduction Efficiency Ratio Without Snapshots: 6.73:1
Total Data Reduction Logical Used without snapshots and flexclones: 95.28GB
Total Data Reduction Physical Used without snapshots and flexclones: 14.15GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 6.73:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 339.2GB
Total Physical Used in FabricPool Performance Tier: 16.02GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 21.18:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 96.89GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 15.86GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 6.11:1
                Logical Space Used for All Volumes: 95.28GB
               Physical Space Used for All Volumes: 95.15GB
               Space Saved by Volume Deduplication: 126.4MB
Space Saved by Volume Deduplication and pattern detection: 126.4MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 106.1GB
              Physical Space Used by the Aggregate: 14.31GB
           Space Saved by Aggregate Data Reduction: 91.77GB
                 Aggregate Data Reduction SE Ratio: 7.41:1
              Logical Size Used by Snapshot Copies: 242.3GB
             Physical Size Used by Snapshot Copies: 1.19GB
              Snapshot Volume Data Reduction Ratio: 202.99:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 202.99:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     841.9GB   861.8GB 19.91GB  18.48GB       2%                    91.77GB                     82%                                 4.10GB               0B                           91.77GB         82%                     4.10GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             100.7GB        11%
      Aggregate Metadata                            11.00GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    65.26GB         7%

      Total Physical Used                           18.48GB         2%


      Total Provisioned Space                       179.0GB        20%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> snapshot show -volume vol1_dst
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm2     vol1_dst
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_051647
                                                         329.2MB     0%    0%
                  test.2024-01-20_0537                   115.0MB     0%    0%
                  snapmirror.a809406f-b751-11ee-b8c0-db82abb5ccf5_2163009838.2024-01-20_055902
                                                         776.8MB     1%    1%
3 entries were displayed.

The physical data usage on SSD has returned to the level it was at before tiering to capacity pool storage.
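The ratios reported by `aggr show-efficiency` above can be re-derived from the raw sizes in the same output:

```python
# Re-deriving the efficiency ratios from the sizes in "aggr show-efficiency".
logical_total = 337.6   # Logical Size Used by Volumes, Clones, Snapshot Copies (GB)
physical_total = 14.31  # Total Physical Used (GB)
aggr_logical = 106.1    # Logical Space Used by the Aggregate (GB)

print(f"{logical_total / physical_total:.2f}:1")  # 23.59:1 = Total Storage Efficiency Ratio
print(f"{aggr_logical / physical_total:.2f}:1")   # 7.41:1 = Aggregate Data Reduction SE Ratio
```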

Let's confirm this from the CloudWatch metrics as well.

Storage metrics of the SnapMirror destination after running Inactive data compression on the destination

This is a little hard to read.

Let's zoom in on the period during which Inactive data compression was running.

Storage metrics of the SnapMirror destination after running Inactive data compression on the destination (2)

You can see that running Inactive data compression reduced the physical data usage on SSD.

Even when the SnapMirror destination uses Tiering Policy All, there is still a benefit to going out of your way to run Inactive data compression on the source

Here are the results for the two points this verification set out to confirm:

  1. Can SnapMirror transfer data while preserving the savings from Inactive data compression?
    • -> Yes, the savings are preserved across the transfer
  2. After the transfer, when data tiered to capacity pool storage is written back to SSD, does it come back compressed?
    • -> No, it is written back uncompressed

Even with Tiering Policy All, data is written to SSD first. So when transferring data with SnapMirror, you need to provision the destination SSD with enough headroom, or throttle the transfer bandwidth, so that the SSD tier does not fill up.

Because the savings survive the transfer, less physical SSD capacity is consumed while data waits to be tiered off to capacity pool storage. That in turn reduces how much SSD "headroom" you have to provision.
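As a back-of-the-envelope sketch (the ~95% savings figure comes from this test; the 30% headroom factor for metadata, snapshot reserve, and tiering lag is an arbitrary assumption), the effect on SSD sizing looks like this:

```python
def ssd_landing_estimate_gb(logical_gb: float, savings_ratio: float,
                            headroom: float = 1.3) -> float:
    """Very rough SSD sizing sketch for a SnapMirror landing zone.

    savings_ratio: fraction of logical data removed by compression on the source.
    headroom: margin for metadata, snapshot reserve, and tiering lag (assumption).
    """
    return logical_gb * (1 - savings_ratio) * headroom

# With the ~95% savings observed in this test, 100 GB of logical data
# needs only a few GB of SSD while it waits to be tiered off:
print(ssd_landing_estimate_gb(100, 0.95))  # ~6.5
# Without any source-side compression, the same transfer would need ~130 GB.
```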

Running Inactive data compression on the source also shrinks the amount of data SnapMirror has to transfer, which cuts both transfer time and transfer charges.

However, as verified in the article below, Inactive data compression can only run against one volume per file system at a time.

So whether it actually reduces the total migration time is something you should evaluate for your own workload.

With a large amount of data spread across many volumes, manually kicking off Inactive data compression per volume is a lot of toil. A better approach is to use volume efficiency inactive-data-compression modify to set the following parameters to 1 on each volume and let the daily background scan handle it.

[-threshold-days ] - Inactive data compression scan threshold days value
Threshold days value for inactive data compression scan.

[-threshold-days-min ] - Inactive data compression scan threshold minimum allowed value.
Minimum allowed value in threshold days for inactive data compression scan.

[-threshold-days-max ] - Inactive data compression scan threshold maximum allowed value.
Maximum allowed value in threshold days for inactive data compression scan.

volume efficiency inactive-data-compression modify

With that in place, data blocks that have gone one day or more without being accessed are compressed automatically.
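As a concrete example, pinning all three parameters to one day on a volume (the volume name vol1 here is just a placeholder) would look like this:

```
::*> volume efficiency inactive-data-compression modify -volume vol1 -threshold-days 1 -threshold-days-min 1 -threshold-days-max 1
```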

For these reasons, I conclude that "even when the SnapMirror destination uses Tiering Policy All, there is still a benefit to going out of your way to run Inactive data compression on the source."

On the other hand, the fact that data is not written back in compressed form is worth keeping in mind.

When migrating to FSxN, my understanding is that the Tiering Policy is commonly staged as follows to keep the provisioned SSD size down:

  • Tiering Policy All during the migration
  • Tiering Policy Auto after the migration completes

This way, you only have to provision SSD for the data that is actually accessed frequently:

  • Rarely accessed data stays on inexpensive capacity pool storage
  • Frequently accessed data gets written back to fast SSD

As this verification showed, the compression savings are lost when data is written back to SSD. As a result, once data is written back, its physical usage on SSD ends up larger than its physical usage was on capacity pool storage.

Be sure to configure Inactive data compression so that it will also run against the data written back to SSD.

I hope this post helps someone.

That's all from me, NonPi (@non____97) of the Consulting Department, AWS Business Division!

